DeepMind team
AlphaTensor: AI system speeds up matrix multiplication with new algorithm
With AlphaTensor, DeepMind Technologies has presented an AI system designed to independently discover novel, efficient, and provably correct algorithms for complex mathematical tasks. AlphaTensor has already identified a new algorithm that carries out matrix multiplications faster than before, as the research team explains in a paper published in the journal Nature. The team reports a speedup of ten to twenty percent compared with previous standard methods. AlphaTensor builds on AlphaZero, an AI agent that proved superior to human players in board games such as chess, Go and Shogi. Founded by British AI researcher, neuroscientist and computer game developer Demosthenes "Demis" Hassabis, the company had already set milestones with AlphaGo and AlphaFold.
- North America > United States > California (0.05)
- Europe > Netherlands > South Holland > Leiden (0.05)
- Europe > Germany > Saarland > Saarbrücken (0.05)
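As background for the fast-matrix-multiplication result described above, the sketch below shows Strassen's classical trick of multiplying two 2x2 matrices with seven scalar multiplications instead of eight. This is the long-standing baseline that tensor-decomposition searches like AlphaTensor's aim to improve on, not the algorithm AlphaTensor itself discovered.

```python
# Strassen's 2x2 scheme: 7 multiplications instead of the naive 8.
# A minimal illustration of the kind of multiplication-count savings
# AlphaTensor searches for; it is NOT AlphaTensor's discovered algorithm.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Standard definition: 8 multiplications.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]]
```

Applied recursively to matrix blocks rather than scalars, the seven-multiplication scheme yields a sub-cubic asymptotic cost, which is why shaving even a few multiplications off such schemes matters in practice.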
Apple's former machine learning director reportedly joins Google's DeepMind team
An Apple executive who oversaw the company's machine learning and artificial intelligence efforts has left in recent weeks, citing its stringent return-to-office policy, according to Bloomberg. Ian Goodfellow is now reportedly joining Google's DeepMind team as an individual contributor, a few years after he left the tech giant for Apple. Based on his LinkedIn profile, Goodfellow had worked in various capacities for Google since 2013, including as a research scientist and as a software engineering intern. Bloomberg says the former Apple exec referenced the policy in a note about his departure addressed to staff members. In April, Apple announced that it would start implementing its return-to-office policy on May 23rd and would require employees to work in its offices at least three times a week.
DeepMind Open-Sources Quantum Chemistry AI Model DM21
Researchers at Google subsidiary DeepMind have open-sourced DM21, a neural network model for mapping electron density to chemical interaction energy, a key component of quantum mechanical simulation. DM21 outperforms traditional models on several benchmarks and is available as an extension to the PySCF simulation framework. The model was described in a paper published in Science. DM21 uses a neural network to approximate the energy functional, a key component of Density Functional Theory (DFT), which describes the quantum mechanical behavior of molecules. DM21 addresses systemic problems with previous functional approximations, which cannot correctly handle systems with "fractional electron character."
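Since DM21 ships as an extension to PySCF, a standard PySCF DFT calculation is the natural starting point. The sketch below runs a conventional functional; the commented-out DM21 hook-up follows the pattern described for the open-source release, but the module and class names in it are assumptions, not verified API.

```python
# A minimal PySCF DFT calculation on H2. Only the commented lines near the
# end are DM21-specific, and their names are assumptions; consult the
# published extension for the real API.
from pyscf import gto, dft

# Hydrogen molecule at roughly its equilibrium bond length.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="cc-pvdz")

mf = dft.RKS(mol)
mf.xc = "b3lyp"            # conventional functional as a baseline
e_b3lyp = mf.kernel()

# Hypothetical DM21 swap-in (module and class names assumed):
# import density_functional_approximation_dm21 as dm21
# mf._numint = dm21.NeuralNumInt(dm21.Functional.DM21)
# e_dm21 = mf.kernel()

print(f"B3LYP total energy: {e_b3lyp:.6f} Ha")
```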
Google Trains 280 Billion Parameter AI Language Model Gopher
Google subsidiary DeepMind announced Gopher, a 280-billion-parameter AI natural language processing (NLP) model. Based on the Transformer architecture and trained on a 10.5TB corpus called MassiveText, Gopher outperformed the current state-of-the-art on 100 of 124 evaluation tasks. The model and several experiments were described in a paper published on arXiv. As part of their research effort in general AI, the DeepMind team trained Gopher and several smaller models to explore the strengths and weaknesses of large language models (LLMs). In particular, the researchers identified tasks where increased model scale led to improved accuracy, such as reading comprehension and fact-checking, as well as those where it did not, such as logical and mathematical reasoning.
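Gopher, like the other large language models mentioned here, is built on the Transformer. As a reminder of the core operation such models scale up, here is a minimal NumPy sketch of single-head causal self-attention; the shapes and names are illustrative only and have nothing to do with Gopher's actual 280-billion-parameter configuration.

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product attention with a causal mask.

    x: (seq_len, d_model) token representations.
    w_q, w_k, w_v: (d_model, d_head) projection matrices.
    Returns (seq_len, d_head) attended representations.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Causal mask: each position may only attend to itself and earlier tokens.
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 8, 16, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(causal_self_attention(x, w_q, w_k, w_v).shape)  # (8, 8)
```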
DeepMind's AI Helped Crack Two Mathematical Puzzles That Stumped Humans for Decades
With his telescope, Galileo gathered a vast trove of observations on celestial objects. With his mind, he found patterns in that universe of data, creating theories on motion and mechanics that paved the way for modern science. Using AI, DeepMind just gave mathematicians a new telescope. Working with two teams of mathematicians, DeepMind engineered an algorithm that can look across different mathematical fields and spot connections that previously escaped the human mind. The AI doesn't do all the work: when fed sufficient data, it finds patterns.
Without Code for DeepMind's Protein AI, One Lab Wrote Its Own
For biologists who study the structure of proteins, the recent history of their field is divided into two epochs: before CASP14, the 14th biennial round of the Critical Assessment of protein Structure Prediction (CASP) conference, and after. In the decades before, scientists had spent years slowly chipping away at the problem of how to predict the structure of a protein from the sequence of amino acids that it comprises. After CASP14, which took place in December 2020, the problem had effectively been solved by researchers at the Google subsidiary DeepMind. A research company focused on a branch of artificial intelligence known as "deep learning," DeepMind had previously made headlines by building an AI system that beat the Go world champion. But their success at protein structure prediction, which they achieved using a neural network they call AlphaFold2, represented the first time they had built a model that could solve a problem of real scientific relevance.
DeepMind AI Predicts Protein Structure
If you are even remotely interested in science, you will probably have already heard about DeepMind's latest leap. Their AI system AlphaFold 2 has cracked the problem of predicting proteins' 3D structure. There are plenty of great articles about it. Since I have written about machine learning/AI in an earlier series of posts, I decided to write a brief post about this development as well. For more details, do check the Nature/New Scientist/DeepMind articles linked above.
DeepMind Research Introduces Algorithms for Causal Reasoning in Probability Trees
For cutting-edge AI researchers looking for models with clean semantics to represent the context-specific causal dependencies essential for causal induction, this DeepMind research encourages a look at good old-fashioned probability trees. A probability tree diagram is used to represent a probability space. Tree diagrams illustrate a series of independent events or conditional probabilities. Each node in a probability tree diagram represents an event and its probability. The root node represents the starting event, whose probability equals one.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.80)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.80)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Model-Based Reasoning (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.39)
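The node-and-probability description above maps directly onto a small recursive data structure. The sketch below is a toy illustration of a discrete probability tree, not code from the DeepMind paper: each node carries an event label and a list of (branch probability, child) pairs, and the probability of reaching a leaf is the product of the branch probabilities along its path from the root.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a probability tree: an event label plus the probability of
    reaching each child given that this node was reached."""
    label: str
    children: list = field(default_factory=list)  # list of (prob, Node) pairs

def leaf_probabilities(node, prob=1.0, prefix=()):
    """Yield (path, probability) for every leaf; the root has probability 1."""
    path = prefix + (node.label,)
    if not node.children:
        yield path, prob
        return
    for p, child in node.children:
        yield from leaf_probabilities(child, prob * p, path)

# A toy two-level tree: rain (30%) influences whether the ground is wet.
root = Node("start", [
    (0.3, Node("rain",    [(0.9, Node("wet")), (0.1, Node("dry"))])),
    (0.7, Node("no rain", [(0.2, Node("wet")), (0.8, Node("dry"))])),
])

for path, p in leaf_probabilities(root):
    print(" -> ".join(path), f"{p:.2f}")
```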
DeepMind Introduces Algorithms for Causal Reasoning in Probability Trees
Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees. Probability trees may have been around for decades, but they have received little attention from the AI and ML community. "Probability trees are one of the simplest models of causal generative processes," explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees. Humans naturally learn to reason in large part through inducing causal relationships from our observations, and we do this remarkably well, cognitive scientists say. Even when the data we perceive is sparse and limited, humans can quickly learn causal structures from interactions between physical objects, from observed co-occurrence frequencies between causes and effects, and so on. Causal induction is also a classic problem in statistics and machine learning.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Model-Based Reasoning (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
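The paper's contribution is a set of concrete algorithms for causal reasoning over such trees, covering conditions, interventions, and counterfactuals. Continuing the toy probability-tree sketch above (it reuses Node, root, and leaf_probabilities from there), the following shows only the intuition behind a hard intervention, forcing one event to occur by giving its branch probability one; it is not the paper's actual algorithm.

```python
def intervene(node, target_label):
    """Return a copy of the tree in which, wherever a child labelled
    `target_label` exists, that branch is forced (probability 1) and its
    siblings are pruned. This is the intuition behind a hard intervention,
    not the algorithm from the DeepMind paper."""
    matching = [(p, c) for p, c in node.children if c.label == target_label]
    if matching:
        children = [(1.0, intervene(c, target_label)) for _, c in matching]
    else:
        children = [(p, intervene(c, target_label)) for p, c in node.children]
    return Node(node.label, children)

forced = intervene(root, "rain")
for path, p in leaf_probabilities(forced):
    print(" -> ".join(path), f"{p:.2f}")
# After forcing rain: P(wet) = 0.9, versus 0.41 = 0.3*0.9 + 0.7*0.2 before.
```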
An IQ Test Proves That Neural Networks Are Capable of Abstract Reasoning
Using those primitives, DeepMind generated a dataset known as Procedurally Generated Matrices (PGM) that consists of triplets such as [progression, shape, color]. The relationships between the attributes in a triplet represent an abstract challenge. For instance, if the first attribute is progression, the values of the other two attributes must progress along rows or columns in the matrix. In order to show signs of abstract reasoning using PGM, a neural network must be able to explicitly compute relationships between different matrix images and evaluate the viability of each potential answer in parallel. To address this challenge, the DeepMind team created a new neural network architecture called the Wild Relation Network (WReN), named in recognition of John Raven's wife Mary Wild, who was also a contributor to the original IQ test. In the WReN architecture, a convolutional neural network (CNN) processes each context panel and an individual answer choice panel independently to produce 9 vector embeddings. This set of embeddings is then passed to a Relation Network, whose output is a single sigmoid unit encoding the "score" for the associated answer choice panel.
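A rough PyTorch sketch of the scoring pipeline described above: a shared CNN embeds the eight context panels and one candidate panel independently, a Relation-Network-style module aggregates all pairs of embeddings, and a sigmoid produces the candidate's score. Panel resolution, layer sizes, and other hyperparameters here are guesses, not the values from DeepMind's paper.

```python
import torch
import torch.nn as nn
from itertools import combinations

class PanelEncoder(nn.Module):
    """Shared CNN applied to each 1x32x32 panel independently (sizes assumed)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, embed_dim),
        )

    def forward(self, panel):
        return self.net(panel)

class WReNScorer(nn.Module):
    """Scores one candidate answer against the 8 context panels."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = PanelEncoder(embed_dim)
        self.g = nn.Sequential(  # relation function over pairs of embeddings
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.f = nn.Sequential(  # readout to a single candidate score
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, panels):
        # panels: (batch, 9, 1, 32, 32) = 8 context panels + 1 candidate panel.
        b, n = panels.shape[:2]
        emb = self.encoder(panels.flatten(0, 1)).view(b, n, -1)  # (b, 9, d)
        pair_sum = 0
        for i, j in combinations(range(n), 2):
            pair_sum = pair_sum + self.g(torch.cat([emb[:, i], emb[:, j]], dim=-1))
        return torch.sigmoid(self.f(pair_sum))  # (b, 1) score for the candidate

panels = torch.randn(2, 9, 1, 32, 32)   # a batch of 2 puzzles
print(WReNScorer()(panels).shape)       # torch.Size([2, 1])
```

In the full setup, each of the eight answer choices would be scored this way and the highest-scoring panel selected.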